Thousands of DebitCredit Transactions-Per-Second: Easy and Inexpensive
Abstract
A $2k computer can execute about 8k transactions per second. This is 80x the 1970s traffic of one of the largest US banks – it approximates the total US financial transaction volume of the 1970s. Very modest modern computers can easily solve yesterday's problems.

1. A Thousand Transactions-Per-Second Was Once Difficult and Expensive

In 1973, Bank of America wanted to convert its paper-based branches, tellers, and demand-deposit (savings) accounts to an online system that would let tellers perform a customer's deposits and withdrawals. The corresponding transaction profile, called DebitCredit, evolved to become a standard measure of transaction processing [Serlin]. At the time, the system of ten thousand tellers needed to perform 100 transactions per second. The ten million account records were about 1 GB, and the 90-day general ledger was about 4 GB. The server hardware for such a system then cost more than ten million dollars, and it was not until 1976 that a commercial database system was able to run 100 transactions per second [Gawlick]. A decade later, Tandem used a 34-CPU, 86-disk SQL system costing ten million dollars to process 208 transactions per second. At the time, this was considered a breakthrough, because relational systems had a reputation for poor performance [Tandem].

For much of the 1980s, the database and transaction processing performance agenda was to achieve a thousand transactions per second. Part of that process defined the "one transaction per second" unit. Informal definitions [Datamation], [1Ktps] and bench-marketing eventually led to the formation of the Transaction Processing Performance Council (www.tpc.org), which defined the TPC-A transaction profile largely in line with DebitCredit [Serlin]. By early 1990, several database systems had achieved the 1,000 tps milestone. By the late 1990s, clusters of 100 machines were delivering over 10,000 tpsA [Scalability]. Long before then, TPC-A was replaced by the more challenging TPC-C benchmark [TPC-C], [Levine]. TPC-C had a similar trajectory: early systems delivered 1k tpmC at about $2,000/tpmC; today's systems deliver about 3M tpmC at about $5/tpmC.
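To make the DebitCredit profile concrete, here is a minimal sketch of the transaction body the benchmark repeats: adjust an account balance, adjust the owning teller and branch totals, and append a history record, all as one atomic transaction. The four-table schema and the names used here (account, teller, branch, history) are a simplified assumption for illustration, not the benchmark's official definition; the real profile also mandates scaling rules, terminal message handling, and response-time constraints that this toy omits.

    # A minimal sketch of a DebitCredit-style transaction (assumed schema).
    import sqlite3

    def setup(conn):
        # Toy versions of the four DebitCredit tables, one row each.
        conn.executescript("""
            CREATE TABLE branch  (branch_id INTEGER PRIMARY KEY, balance INTEGER);
            CREATE TABLE teller  (teller_id INTEGER PRIMARY KEY,
                                  branch_id INTEGER, balance INTEGER);
            CREATE TABLE account (account_id INTEGER PRIMARY KEY,
                                  branch_id INTEGER, balance INTEGER);
            CREATE TABLE history (account_id INTEGER, teller_id INTEGER,
                                  branch_id INTEGER, delta INTEGER);
            INSERT INTO branch  VALUES (1, 0);
            INSERT INTO teller  VALUES (1, 1, 0);
            INSERT INTO account VALUES (1, 1, 1000);
        """)

    def debit_credit(conn, account_id, teller_id, branch_id, delta):
        # One DebitCredit transaction: update the account, teller, and
        # branch balances, log a history record, return the new balance.
        with conn:  # BEGIN ... COMMIT; rolls back if anything raises
            conn.execute("UPDATE account SET balance = balance + ? "
                         "WHERE account_id = ?", (delta, account_id))
            conn.execute("UPDATE teller SET balance = balance + ? "
                         "WHERE teller_id = ?", (delta, teller_id))
            conn.execute("UPDATE branch SET balance = balance + ? "
                         "WHERE branch_id = ?", (delta, branch_id))
            conn.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
                         (account_id, teller_id, branch_id, delta))
            (balance,) = conn.execute("SELECT balance FROM account "
                                      "WHERE account_id = ?",
                                      (account_id,)).fetchone()
        return balance  # echoed back to the teller terminal

    conn = sqlite3.connect(":memory:")
    setup(conn)
    print(debit_credit(conn, 1, 1, 1, -100))  # a 100-unit withdrawal -> 900

The with-block supplies the atomic commit/abort that makes each debit or credit an ACID transaction; the paper's observation is that even a $2k commodity machine can now run thousands of such transactions per second.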
Similar resources
Benchmarking Object Databases: Past, Present & Future
Developing performance metrics for any database system is always a difficult task. Generally, a benchmark is designed to be representative of the characteristics and load of a "real" application, or to test some specific components of a DBMS. An example of the former is DebitCredit (Anon et al. 1985), which was designed to measure Transactions Per Second (TPS) performance (or system throughput)...
Domus Tower Blockchain (DRAFT), March 28, 2016
The purpose of this work is to demonstrate a fast, efficient, highly scalable blockchain. Current blockchain implementations have performance limitations that make them unsuitable for high-speed record keeping. For example, the Bitcoin blockchain has a global maximum sustained transaction rate of 7 tps [1][2]. Other blockchain implementations have maximum transaction rates in the hundreds or pos...
Blockclique: scaling blockchains through transaction sharding in a multithreaded block graph
Crypto-currencies based on the blockchain architecture cannot scale to thousands of transactions per second. We design a new cryptocurrency architecture, called the blockclique, combining transaction sharding, where transactions are separated into multiple groups based on their input address, and a multithreaded directed acyclic block graph structure, where each block references one previous bl...
Tipping Pennies? Privately. Practical Anonymous
We design and analyze the first practical anonymous payment mechanisms for network services. We start by reporting on our experience with the implementation of a routing micropayment solution for Tor. We then propose micropayment protocols of increasingly complex requirements for networked services, such as p2p or cloud-hosted services. The solutions are efficient, with bandwidth and latency ov...
Estimating Bottlenecks of Very Large Models
Queueing theory has been extensively used since the 1970s to carry out performance evaluations of complex computer systems. However, handling the complexity of modern computer networks with thousands of servers and millions of customers is still a challenge. Indeed, although the identification of product-form queueing networks has promoted the development of computationally tractable exact solution ...
Journal: CoRR
Volume: abs/cs/0701161
Published: 2005